The 2026 AI Safety Mandate: 3 Hours to Takedown
The Ministry of Electronics and Information Technology (MeitY) has moved from "Advisories" to "Action." As of February 20, 2026, the Information Technology Amendment Rules, 2026 are in force, creating the strictest deepfake governance framework in the Global South.
1. The "Golden 3-Hour" Rule
Under the new rules, if a court or a government agency flags a deepfake, especially one involving political figures or national security, platforms such as X (Twitter), WhatsApp, and Instagram have just 3 hours to disable access to it.
Previously, platforms had 36 hours. This massive compression of the takedown window is designed to stop viral misinformation before it can sway public opinion during local elections.
2. Mandatory "SGI" Labeling
The law formally defines Synthetically Generated Information (SGI). Any media, whether audio, video, or image, that has been artificially generated or altered must carry:
- A prominent visible watermark (covering at least 10% of the screen for videos).
- Immutable Metadata: A digital fingerprint that stays with the file even if it is downloaded and re-uploaded.
- Provenance Tracking: The ability for authorities to trace the content back to the original AI tool used to create it.
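To make the first requirement concrete, here is a minimal sketch of what a compliant visible label might look like in code. This is purely illustrative: the rules do not prescribe an implementation, and the banner placement, wording, and opacity used below are assumptions. The sketch uses the Pillow imaging library to overlay a strip covering exactly 10% of the frame's area.

```python
from PIL import Image, ImageDraw

def add_sgi_watermark(img: Image.Image, label: str = "AI-GENERATED") -> Image.Image:
    """Overlay a visible banner covering ~10% of the image area.

    Illustrative only: the exact watermark spec (placement, wording,
    opacity) under the 2026 rules is an assumption here.
    """
    w, h = img.size
    banner_h = max(1, h // 10)  # full-width strip, 10% of the frame height
    out = img.copy().convert("RGBA")
    # Semi-opaque black strip so the underlying content stays partly visible
    overlay = Image.new("RGBA", (w, banner_h), (0, 0, 0, 180))
    draw = ImageDraw.Draw(overlay)
    draw.text((10, banner_h // 4), label, fill=(255, 255, 255, 255))
    out.paste(overlay, (0, h - banner_h), overlay)  # anchor at bottom edge
    return out.convert("RGB")

# Demo: label a blank 640x360 video frame
frame = Image.new("RGB", (640, 360), "gray")
marked = add_sgi_watermark(frame)
print(marked.size)
```

A real deployment would also need to burn the label into every frame of a video and pair it with the metadata and provenance requirements below, which operate at the file and tooling level rather than on the pixels.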
3. Loss of "Safe Harbour" Protection
This is the biggest blow to tech giants. If a platform fails to remove a reported deepfake within the 3-hour window, it loses its Section 79 Safe Harbour immunity. This means the platform itself can be sued or held criminally liable for the content posted by its users.
What this means for SkillPlusHub readers:
If you see a suspicious video of a leader or a celebrity, you can now report it through the National Cyber Crime Reporting Portal. The 2026 rules ensure that the "Digital Police" have the teeth to act instantly. Stay safe, stay verified!

